Testing Strategies

Rapid overview

Testing Strategies for High-Performance, Highly Available Systems

Use these notes to articulate how you design and execute tests that protect performance, availability, and correctness. Keep them alongside the core concepts cheat sheet and tailor examples to your services.

---

Unit Tests

  • Purpose: Validate a single class/function in isolation with deterministic inputs/outputs.
  • Isolation:
      • Mock external dependencies (I/O, network, time) with interfaces and test doubles.
      • Use in-memory fakes for lightweight state (e.g., InMemoryRepository), but prefer mocks for behavior verification.
  • Patterns:
      • Arrange-Act-Assert (AAA): Make the phases explicit; minimize setup repetition with builders/AutoFixture.
      • Given-When-Then naming: GivenHealthyAccount_WhenWithdraw_ThenBalanceUpdated for intent clarity.
      • Table-driven tests: Iterate over scenarios via Theory/InlineData in xUnit to keep cases compact.
  • xUnit + Moq + Shouldly example:

```csharp
using System.Threading;
using System.Threading.Tasks;
using Moq;
using Shouldly;
using Xunit;

public class BalanceServiceTests
{
    private readonly Mock<IAccountStore> _store = new();
    private readonly BalanceService _sut;

    public BalanceServiceTests()
    {
        _sut = new BalanceService(_store.Object);
    }

    [Theory]
    [InlineData(100, 40, 60)]
    [InlineData(50, 25, 25)]
    public async Task GivenBalance_WhenWithdraw_ThenBalanceUpdated(decimal starting, decimal debit, decimal expected)
    {
        // Arrange
        _store.Setup(s => s.GetAsync("id", It.IsAny<CancellationToken>()))
              .ReturnsAsync(new Account("id", starting));

        // Act
        await _sut.WithdrawAsync("id", debit, CancellationToken.None);

        // Assert
        _store.Verify(s => s.SaveAsync(It.Is<Account>(a => a.Balance == expected), It.IsAny<CancellationToken>()));
        _sut.LastLatencyMs.ShouldBeLessThan(5); // Cheap guardrail for perf-sensitive code paths
    }
}
```

  • Performance-aware design:
      • Avoid sleeping; use TestScheduler/ManualResetEventSlim for timing-sensitive logic.
      • Keep allocations predictable; reuse fixture data/builders instead of newing objects per assertion.
      • Target micro-benchmarks separately with BenchmarkDotNet rather than overloading unit tests (see the sketch after this list).
  • Reliability:
      • No hidden global state; reset static caches/singletons between runs.
      • Keep unit tests idempotent and order-independent.
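
A minimal BenchmarkDotNet sketch of the kind of micro-benchmark that belongs in its own project (the benchmarked method is a hypothetical stand-in for a hot-path calculation, not an API from this document):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser] // report allocations alongside timings so regressions in either dimension surface
public class HotPathBenchmarks
{
    private readonly decimal[] _balances = { 10m, 250.75m, 9_999.99m, 0.01m };

    // Hypothetical hot path: swap in the real method you care about (e.g., an interest or fee calculation).
    [Benchmark]
    public decimal SumBalances()
    {
        var total = 0m;
        foreach (var balance in _balances)
        {
            total += balance;
        }
        return total;
    }
}

public static class BenchmarkProgram
{
    public static void Main() => BenchmarkRunner.Run<HotPathBenchmarks>();
}
```

Run it in Release mode from a dedicated console project; CI can then compare exported results against a stored baseline to flag meaningful shifts.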

Integration Tests

  • Purpose: Validate real wiring: DI container, middleware, persistence, messaging, and observability hooks.
  • Environment strategy:
      • Run against ephemeral infra (Testcontainers/Docker Compose) with realistic versions of databases/queues (see the fixture sketch after the example below).
      • Seed data via migrations + fixtures; tear down cleanly to avoid cross-test coupling.
      • Use unique resource names (DB names, queues) per test run to enable parallel execution.
  • Patterns:
      • WebApplicationFactory/MinimalApiFactory: Spin up APIs in-memory with production middleware, swapping only endpoints you must stub (e.g., external HTTP clients).
      • Contract tests: Validate message schemas and HTTP contracts against consumer/provider expectations.
      • Idempotency checks: Re-run the same operation twice and assert consistent results to mirror at-least-once delivery.
  • Minimal integration test example (xUnit + WebApplicationFactory + Shouldly):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Shouldly;
using Xunit;

public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public HealthEndpointTests(WebApplicationFactory<Program> factory)
    {
        _client = factory.CreateClient(new WebApplicationFactoryClientOptions { AllowAutoRedirect = false });
    }

    [Fact]
    public async Task GetHealth_ReturnsOkAndBudgetedLatency()
    {
        // ValueStopwatch: allocation-free stopwatch helper (commonly copied from ASP.NET Core internals).
        var stopwatch = ValueStopwatch.StartNew();
        var response = await _client.GetAsync("/health");
        var elapsedMs = stopwatch.GetElapsedTime().TotalMilliseconds;

        response.StatusCode.ShouldBe(HttpStatusCode.OK);
        elapsedMs.ShouldBeLessThan(50, "health endpoints must stay fast to avoid liveness probe churn");
    }
}
```
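
A sketch of the ephemeral-infrastructure approach referenced in the environment-strategy bullets, using the Testcontainers.PostgreSql package (the image tag and fixture name are illustrative):

```csharp
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

// Shared via IClassFixture<PostgresFixture> (or a collection fixture); every run gets a fresh container.
public sealed class PostgresFixture : IAsyncLifetime
{
    public PostgreSqlContainer Container { get; } = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine") // pin a realistic production version
        .Build();

    // Only valid after the container has started (i.e., after InitializeAsync has run).
    public string ConnectionString => Container.GetConnectionString();

    public Task InitializeAsync() => Container.StartAsync();

    public Task DisposeAsync() => Container.DisposeAsync().AsTask();
}
```

Tests take the fixture in their constructor, run migrations against ConnectionString, and never share schema or data across runs.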

  • Performance & HA focus:
      • Assert on latency budgets (e.g., middleware response times) with histogram/percentile metrics exposed in tests.
      • Simulate failure modes: kill containers, drop network, poison queue messages, exhaust connection pools; verify graceful degradation and recovery.
      • Validate that circuit breakers, bulkheads, and timeouts are configured with realistic thresholds (a Polly-based sketch follows).
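
A hedged sketch of that last bullet using Polly v8's ResiliencePipelineBuilder; the thresholds are placeholders, and in a real suite the pipeline would be resolved from the application's DI container so the test exercises production configuration rather than a copy:

```csharp
using System;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;
using Shouldly;
using Xunit;

public class ResilienceConfigurationTests
{
    [Fact]
    public async Task CircuitBreaker_Opens_After_Repeated_Failures_And_Rejects_Fast()
    {
        var pipeline = new ResiliencePipelineBuilder()
            .AddCircuitBreaker(new CircuitBreakerStrategyOptions
            {
                FailureRatio = 1.0,                          // open when every sampled call fails...
                MinimumThroughput = 2,                       // ...once at least 2 calls landed in the window
                SamplingDuration = TimeSpan.FromSeconds(10),
                BreakDuration = TimeSpan.FromSeconds(5)
            })
            .Build();

        // Drive enough handled failures through the pipeline to trip the breaker.
        for (var i = 0; i < 2; i++)
        {
            await Should.ThrowAsync<InvalidOperationException>(async () =>
                await pipeline.ExecuteAsync<int>(_ => throw new InvalidOperationException("downstream failure")));
        }

        // With the circuit open, the call is rejected immediately instead of hitting the dependency.
        await Should.ThrowAsync<BrokenCircuitException>(async () =>
            await pipeline.ExecuteAsync<int>(_ => ValueTask.FromResult(1)));
    }
}
```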

Cross-Cutting Testing Practices

  • Test data discipline: Centralize builders/fixtures to avoid duplication and to make hot-path payloads realistic (size, shape, field optionality).
  • Observability hooks: Assert logs/metrics/traces for key scenarios (success, validation errors, retries). Use in-memory exporters for OpenTelemetry.
  • Deterministic time & randomness: Inject clocks/Random seeds; freeze time in tests to avoid flakiness (see the clock sketch after this list).
  • Parallelism: Mark tests collection-safe; isolate shared resources to allow high-concurrency runs in CI without interference.
  • CI pipeline:
      • Run unit tests fast on every push; gate integration/system tests on main merge or nightly.
      • Capture artifacts (logs, coverage, traces) to speed triage when failures occur.
  • Coverage mindset: Optimize for risk: critical paths (auth, payments, risk controls), failure handling, and regression-prone areas get deeper coverage.
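
For the deterministic-time bullet, a minimal hand-rolled clock abstraction (names are illustrative; .NET's TimeProvider with FakeTimeProvider from Microsoft.Extensions.TimeProvider.Testing is an off-the-shelf alternative):

```csharp
using System;

// Production code asks IClock for time instead of reading DateTimeOffset.UtcNow directly.
public interface IClock
{
    DateTimeOffset UtcNow { get; }
}

public sealed class SystemClock : IClock
{
    public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}

// Test double: time only moves when the test says so, which keeps expiry/timeout logic deterministic.
public sealed class FakeClock : IClock
{
    public FakeClock(DateTimeOffset start) => UtcNow = start;

    public DateTimeOffset UtcNow { get; private set; }

    public void Advance(TimeSpan delta) => UtcNow = UtcNow.Add(delta);
}
```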

When Discussing in an Interview

  • Narrative: Outline pyramid strategy—fast unit tests, targeted integration, and a few end-to-end paths covering the golden user journeys.
  • Performance posture: Emphasize how tests enforce latency/error budgets and protect against resource exhaustion (threads, sockets, DB connections).
  • Availability posture: Highlight chaos/failover scenarios you automate (leader election, connection drop, retry storms) and how you keep tests isolated and repeatable.
  • Tooling: Mention xUnit, AutoFixture/FluentAssertions for clarity; Testcontainers/Docker Compose for realistic environments; Polly + OpenTelemetry assertions for resilience.

---

Keep these patterns close to the code you ship—optimize for speed, determinism, and confidence without slowing delivery.

---

Questions & Answers

Q: How do you prevent performance regressions from slipping through unit tests?

A: Keep micro-benchmarks in BenchmarkDotNet projects, but add lightweight latency guards to hot-path unit tests (e.g., ShouldBeLessThan on critical methods) and fail builds on meaningful percentile shifts in CI metrics exports. Use deterministic data builders to avoid noisy allocations that mask regressions.

Q: What is your approach to testing high-availability scenarios in integration tests?

A: Exercise failure modes intentionally: kill containers, drop network connections, or poison queue messages using Testcontainers hooks. Assert that retries, circuit breakers, and bulkheads recover within error budgets, and verify observability signals (logs/metrics/traces) show the expected degradation and recovery steps.
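
A sketch of one such probe, reusing the PostgresFixture shape sketched earlier; TestApiFactory is a hypothetical helper that wires the API under test to the container's connection string:

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Shouldly;
using Xunit;

public class DatabaseOutageTests : IClassFixture<PostgresFixture>
{
    private readonly PostgresFixture _postgres;
    private readonly HttpClient _client;

    public DatabaseOutageTests(PostgresFixture postgres)
    {
        _postgres = postgres;
        _client = TestApiFactory.CreateClient(_postgres.ConnectionString); // hypothetical bootstrap helper
    }

    [Fact]
    public async Task Orders_Endpoint_Degrades_Gracefully_During_Db_Outage_And_Recovers()
    {
        // Induce the failure: stop the database container mid-test.
        await _postgres.Container.StopAsync();

        var degraded = await _client.GetAsync("/orders/recent");
        degraded.StatusCode.ShouldBe(HttpStatusCode.ServiceUnavailable); // a fast fallback, not a hang or a 500 storm

        // Recover the dependency and confirm the service heals (in practice, poll within the retry budget).
        await _postgres.Container.StartAsync();

        var healthy = await _client.GetAsync("/orders/recent");
        healthy.StatusCode.ShouldBe(HttpStatusCode.OK);
    }
}
```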

Q: How do you keep integration tests parallelizable without flakiness?

A: Use unique resource identifiers (database names, queue topics, blob prefixes) per test run, isolate shared state through fixtures, and ensure teardown cleans resources. Mark collection fixtures to avoid serial bottlenecks and rely on containerized dependencies to avoid cross-test interference.
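
A small sketch of the naming and collection setup (the names are illustrative, and PostgresFixture is the container fixture sketched earlier):

```csharp
using System;
using Xunit;

// Unique, per-run resource names keep parallel test collections from colliding in shared infrastructure.
public static class TestResourceNames
{
    public static string Database() => $"orders_it_{Guid.NewGuid():N}";

    public static string Queue(string logicalName) => $"{logicalName}-{Guid.NewGuid():N}";
}

// Tests that must share one expensive fixture opt into the same collection via [Collection("orders-db")];
// different collections still run in parallel against their own isolated resources.
[CollectionDefinition("orders-db")]
public sealed class OrdersDatabaseCollection : ICollectionFixture<PostgresFixture>
{
}
```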

Q: When is it appropriate to include chaos testing in CI?

A: Run minimal chaos probes (like restarting a dependency once) on main-branch merges to catch regressions early, but reserve heavier scenarios (multi-node failovers, prolonged network partitions) for nightly or pre-release pipelines to balance feedback speed with stability.

Q: How do you validate observability instrumentation through tests?

A: Attach in-memory exporters for OpenTelemetry during integration tests, trigger key user journeys, and assert on emitted spans/metrics/logs (names, attributes, and error flags). This ensures dashboards and alerts stay trustworthy without requiring external telemetry backends.
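
A sketch of that setup with the OpenTelemetry.Exporter.InMemory package; the source name, span name, and attribute are placeholders for your own instrumentation:

```csharp
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using OpenTelemetry;
using OpenTelemetry.Trace;
using Shouldly;
using Xunit;

public class TracingAssertionTests
{
    [Fact]
    public void Withdraw_Emits_Span_With_Expected_Attributes()
    {
        var exported = new List<Activity>();

        // Route spans from the (placeholder) ActivitySource into an in-memory list instead of a real backend.
        using var tracerProvider = Sdk.CreateTracerProviderBuilder()
            .AddSource("Payments.BalanceService")
            .AddInMemoryExporter(exported)
            .Build();

        // Stand-in for invoking the real code path that owns this instrumentation.
        var source = new ActivitySource("Payments.BalanceService");
        using (var activity = source.StartActivity("withdraw"))
        {
            activity?.SetTag("account.id", "id-123");
        }

        tracerProvider.ForceFlush();

        var span = exported.Single(a => a.DisplayName == "withdraw");
        span.GetTagItem("account.id").ShouldBe("id-123");
    }
}
```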

Q: How do you keep the test pyramid healthy for high-performance services?

A: Push most coverage into deterministic unit tests, use focused integration tests for DI/middleware/wiring, and reserve a handful of end-to-end tests for golden paths. That keeps feedback fast while still exercising resilience features like retries and telemetry in realistic environments.

Q: Where do contract tests fit into your integration strategy?

A: Use consumer-driven contract tests for HTTP/gRPC/messaging boundaries. They validate payload shapes and behavior without spinning up the entire dependency graph, giving you rapid feedback whenever a producer changes schemas or status codes.
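
Full consumer-driven suites typically use PactNet; a lighter-weight flavor, sketched here against the in-memory host from the integration example, pins the provider's response to the DTO the consumer actually depends on (the endpoint and record are hypothetical):

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Shouldly;
using Xunit;

// The consumer's view of the contract: only the fields it relies on, nothing provider-internal.
public sealed record AccountSummaryContract(string AccountId, decimal Balance, string Currency);

public class AccountContractTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public AccountContractTests(WebApplicationFactory<Program> factory) => _client = factory.CreateClient();

    [Fact]
    public async Task GetAccount_Matches_Consumer_Contract()
    {
        // Hypothetical endpoint; the assertions cover the fields the consumer depends on.
        var summary = await _client.GetFromJsonAsync<AccountSummaryContract>("/accounts/id-123");

        summary.ShouldNotBeNull();
        summary.AccountId.ShouldBe("id-123");
        summary.Currency.ShouldNotBeNullOrWhiteSpace();
        summary.Balance.ShouldBeGreaterThanOrEqualTo(0m);
    }
}
```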

Q: How do you keep test data realistic without becoming brittle?

A: Centralize builders/AutoFixture customizations so payload size/shape mirrors production, randomize optional fields to catch null-handling bugs, and snapshot baseline objects when you need explicit comparisons. Builders live next to the domain so updates ripple automatically.
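
One hedged shape for such a builder, matching the two-field Account used in the unit-test example (extend it as the domain grows):

```csharp
// Centralized builder: realistic defaults, explicit overrides, one place to update when payloads change.
public sealed class AccountBuilder
{
    private string _id = "acc-001";
    private decimal _balance = 1_000m;

    public AccountBuilder WithId(string id) { _id = id; return this; }

    public AccountBuilder WithBalance(decimal balance) { _balance = balance; return this; }

    public Account Build() => new(_id, _balance);
}
```

Tests then read as new AccountBuilder().WithBalance(50m).Build(), and because the builder lives next to the domain type, shape changes ripple through one file instead of dozens of fixtures.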

Q: How do you enforce deterministic time and randomness in tests?

A: Abstract clock/random dependencies (ISystemClock, deterministic Random seeds) and inject test doubles that you can fast-forward. Avoid DateTime.UtcNow or Guid.NewGuid() inside tests; use deterministic sequences so assertions stay stable and failures are reproducible.
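
A small sketch of injectable randomness (the jitter helper is illustrative; the point is that the seed, not the call site, controls the sequence):

```csharp
using System;

// Production code receives a Random (or a factory) via DI instead of calling new Random() inline.
public sealed class RetryJitter
{
    private readonly Random _random;

    public RetryJitter(Random random) => _random = random;

    // Adds up to 20% random jitter to a retry delay to avoid synchronized retry storms.
    public TimeSpan Apply(TimeSpan baseDelay) =>
        baseDelay + TimeSpan.FromMilliseconds(_random.NextDouble() * 0.2 * baseDelay.TotalMilliseconds);
}

// In tests: a fixed seed makes the sequence reproducible across runs and machines.
// var jitter = new RetryJitter(new Random(12345));
```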

Q: What’s your CI strategy for mixing fast and slow test suites?

A: Run unit tests + lightweight integration tests on every PR to keep cycle times low. Schedule heavier suites (full container stacks, chaos scenarios, long-running benchmarks) nightly or before releases, and promote artifacts/logs to speed triage when they fail.